From InfoWorld, 4 days ago
Automated data poisoning proposed as a solution for AI theft threat
Researchers have developed a tool that they say can render stolen high-value proprietary data used in AI systems useless, a defense that CSOs may need to adopt to protect their sophisticated large language models (LLMs). The technique, created by researchers from universities in China and Singapore, injects plausible but false data into what's known as a knowledge graph (KG), the structure in which an AI operator stores the proprietary data its LLM draws on.
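The article does not describe the researchers' implementation, but the general idea of a decoy-injection defense can be sketched. In this hypothetical example, a knowledge graph is modeled as a list of (subject, relation, object) triples; plausible but false triples from a decoy pool are mixed in, and a private ledger lets the legitimate operator filter them back out while a thief cannot distinguish real facts from poison. All function names and the ledger mechanism here are illustrative assumptions, not the paper's method.

```python
import random

def inject_decoys(triples, decoy_pool, fraction=0.2, seed=0):
    """Mix false decoy triples into a knowledge graph.

    Returns (poisoned_kg, ledger): the poisoned triple list, plus a
    private ledger recording which triples are decoys so the operator
    can filter them out later. Hypothetical sketch, not the paper's API.
    """
    rng = random.Random(seed)
    # Number of decoys to add, proportional to KG size (at least one).
    n = max(1, int(len(triples) * fraction))
    decoys = rng.sample(decoy_pool, min(n, len(decoy_pool)))
    poisoned = list(triples) + decoys
    rng.shuffle(poisoned)  # decoys are indistinguishable by position
    ledger = set(map(tuple, decoys))
    return poisoned, ledger

def filter_decoys(poisoned, ledger):
    """Recover the clean KG, usable only by whoever holds the ledger."""
    return [t for t in poisoned if tuple(t) not in ledger]
```

A stolen copy of `poisoned` without the ledger contains fabricated facts that degrade any LLM built on it, while the operator's own pipeline calls `filter_decoys` first and trains on clean data.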